55 research outputs found

    Using registries to integrate bioinformatics tools and services into workbench environments

    The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially within convenient, integrated “workbench” environments. Resource descriptions are the core element of registry and workbench systems, and are used both to help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them. The descriptions are, however, hard and expensive to create and maintain, because they are volatile and require an exhaustive knowledge of the described resource, its applicability to biological research, and the data model and syntax used to describe it. We present here the Workbench Integration Enabler, a software component that will ease the integration of bioinformatics resources into a workbench environment, using the descriptions provided by the existing ELIXIR Tools and Data Services Registry.
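
    The Workbench Integration Enabler builds on tool descriptions served by the ELIXIR registry. As a minimal illustration of that starting point, the Python sketch below fetches one description from the bio.tools REST API and reduces it to the fields a workbench would need to present and launch the tool. The endpoint path and the biotoolsSchema field names used here are assumptions to verify against the live API, and "signalp" is only an example identifier.

        # Minimal sketch: fetch a bio.tools description and keep the fields a
        # workbench integration layer typically needs. Endpoint and field names
        # are assumptions; check https://bio.tools/api before relying on them.
        import requests

        def fetch_tool_description(biotools_id: str) -> dict:
            url = f"https://bio.tools/api/tool/{biotools_id}/?format=json"
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return response.json()

        def workbench_view(entry: dict) -> dict:
            # Keep only what is needed to find, comprehend and launch the tool.
            return {
                "name": entry.get("name"),
                "homepage": entry.get("homepage"),
                "description": entry.get("description"),
                "operations": [
                    op.get("term")
                    for fn in entry.get("function", [])
                    for op in fn.get("operation", [])
                ],
            }

        if __name__ == "__main__":
            print(workbench_view(fetch_tool_description("signalp")))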

    Analysis of the Federal Aviation Administration's Small UAS Regulations for Hobbyist and Recreational Users

    The widespread proliferation of small Unmanned Aerial Systems (sUAS), particularly those used for hobby and recreational purposes, has become a growing problem for the Federal Aviation Administration (FAA) in recent years. Reports of near misses between aircraft and sUAS are on the rise, and several anecdotes of ground injuries and property damage can be traced back to sUAS operations. This study sought to explore recent regulatory and policy initiatives in place to deter unsafe hobby and recreational sUAS use and hold operators accountable for hazardous sUAS activities. Using document analysis, case study, and conceptual analysis methodology, researchers analyzed 40 information sources. The study evaluated the FAA’s sUAS registration policy, current agency civil enforcement guidance, and the potential for criminal prosecution or civil liability. The study specifically addresses the risk to pilots of certificate suspension or revocation and evaluates the applicability of the Aviation Safety Reporting System (ASRS) for mitigating FAA enforcement actions against recreational sUAS operators. Finally, the study examines the potential liability incurred by sUAS operations and the applicability of various insurance policies.

    Tools and data services registry: a community effort to document bioinformatics resources

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR, the European infrastructure for biological information, that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.
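
    To make the registry's scope concrete, here is a minimal sketch of a keyword search against its public REST API. The endpoint, query parameters, and the response key "list" are assumptions about the interface at https://bio.tools/api and should be verified against the live documentation.

        import requests

        def search_registry(query: str, page: int = 1) -> list:
            response = requests.get(
                "https://bio.tools/api/tool/",
                params={"q": query, "page": page, "format": "json"},
                timeout=30,
            )
            response.raise_for_status()
            # The paginated response is assumed to hold its hits under the "list" key.
            return response.json().get("list", [])

        for hit in search_registry("multiple sequence alignment")[:5]:
            print(hit.get("biotoolsID"), "-", hit.get("name"))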

    JB-300: An advanced medium size transport for 2005

    In the fall of 1992, the TAC Team was presented with a Request for Proposal (RFP) for a mid-size (250-350 passenger) commercial transport. The aircraft was to be extremely competitive in the areas of passenger comfort, performance, and economics. Through the use of supercritical airfoils, a technologically advanced Very High By-pass Ratio (VHBR) turbofan engine, a low overall drag configuration, a comparable interior layout, and moderate use of composites, the JB-300 offers an economically viable choice to the airlines. The cost per passenger mile of the JB-300 is 1.76 cents, which is considerably lower than that of current aircraft in the same range. Overall, the JB-300 is a technologically advanced aircraft that will meet the demands of the 21st century.
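
    The quoted 1.76 cents per passenger mile is an operating-cost metric; the short sketch below only illustrates the arithmetic behind such a figure. Every number in it is a hypothetical placeholder, not data from the JB-300 study, chosen merely to land near the quoted value.

        # Hypothetical illustration of the arithmetic behind a cost-per-passenger-mile figure.
        # None of these numbers come from the JB-300 report; they are placeholders chosen
        # only to land near the quoted 1.76 cents.
        trip_operating_cost_usd = 18_500   # direct operating cost for one trip (assumed)
        passengers = 300                   # passengers carried (assumed)
        trip_distance_miles = 3_500        # stage length in statute miles (assumed)

        cents_per_passenger_mile = 100 * trip_operating_cost_usd / (passengers * trip_distance_miles)
        print(f"{cents_per_passenger_mile:.2f} cents per passenger mile")  # about 1.76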

    Perspectives on automated composition of workflows in the life sciences [version 1; peer review: 2 approved]

    Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the “big picture” of the scientific workflow development life cycle, before surveying and discussing current methods, technologies and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation. Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.

    Stian Soiland-Reyes was supported by the BioExcel-2 Centre of Excellence, funded by the European Commission Horizon 2020 programme under contract H2020-INFRAEDI-02-2018 823830. Carole Goble was supported by EOSC-Life, funded by the European Commission Horizon 2020 programme under grant agreement H2020-INFRAEOSC-2018-2 824087. We gratefully acknowledge the financial support from the Lorentz Center, ELIXIR, and the Leiden University Medical Center (LUMC) that made the workshop possible. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
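
    The six-stage life cycle summarized above can be made concrete with a small data structure. The sketch below is only an illustration: the stage names follow the article, while the enumeration and the transition helper are our own.

        from enum import Enum
        from typing import Optional

        class WorkflowStage(Enum):
            SCIENTIFIC_QUESTION = 1
            CONCEPTUAL_WORKFLOW = 2
            ABSTRACT_WORKFLOW = 3
            CONCRETE_WORKFLOW = 4
            PRODUCTION_WORKFLOW = 5
            SCIENTIFIC_RESULTS = 6

        def next_stage(stage: WorkflowStage) -> Optional[WorkflowStage]:
            # Each forward transition is where annotation, composition,
            # benchmarking and deployment tooling comes into play.
            stages = list(WorkflowStage)
            position = stages.index(stage)
            return stages[position + 1] if position + 1 < len(stages) else None

        print(next_stage(WorkflowStage.ABSTRACT_WORKFLOW))  # WorkflowStage.CONCRETE_WORKFLOW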

    Using bio.tools to generate and annotate workbench tool descriptions

    Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is incomplete or outdated tool descriptions that often lack important information, including parameters and metadata such as publications or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools - which have been registered in the ELIXIR tools registry (https://bio.tools) - into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata.
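
    The sketch below is not ToolDog itself, but it illustrates the underlying idea of the second module: combining a minimal CWL CommandLineTool skeleton with metadata pulled from a bio.tools entry. The biotoolsSchema field names are assumptions, and the example entry is invented.

        import yaml  # requires PyYAML

        def cwl_skeleton_from_biotools(entry: dict, base_command: str) -> str:
            # "label" and "doc" are standard CWL fields; the values come from the
            # (assumed) biotoolsSchema keys "name" and "description".
            tool = {
                "cwlVersion": "v1.2",
                "class": "CommandLineTool",
                "label": entry.get("name"),
                "doc": entry.get("description"),
                "baseCommand": base_command,
                "inputs": {},   # left empty: to be filled in by hand or by code analysis
                "outputs": {},  # left empty: to be filled in by hand or by code analysis
            }
            return yaml.safe_dump(tool, sort_keys=False)

        example_entry = {"name": "ExampleTool",
                         "description": "Invented entry used only for illustration.",
                         "homepage": "https://example.org/exampletool"}
        print(cwl_skeleton_from_biotools(example_entry, base_command="exampletool"))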

    JIB.tools 2.0 – A Bioinformatics Registry for Journal Published Tools with Interoperability to bio.tools

    JIB.tools 2.0 is a new approach to embedding the curation process more closely in the publication process. The website hosts the tools, software applications, databases and workflow systems published in the Journal of Integrative Bioinformatics (JIB). As soon as a new tool-related publication appears in JIB, the tool is posted to JIB.tools and can afterwards be easily transferred to bio.tools, a large information repository of software tools, databases and services for bioinformatics and the life sciences. In this way, an easily accessible list of the tools published in JIB is provided, together with status information about the underlying services. With newer registries like bio.tools providing this information at a larger scale, JIB.tools 2.0 closes the gap between journal publication and registry publication. (Reference: https://jib.tools)
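
    The transfer from JIB.tools to bio.tools amounts to mapping one metadata record onto another schema. The sketch below only illustrates that kind of mapping: the JIB-side record structure and the DOI are invented placeholders, the bio.tools-side keys follow biotoolsSchema as an assumption, and a real deposition would go through the authenticated bio.tools API rather than this function.

        def jib_to_biotools(jib_record: dict) -> dict:
            # Hypothetical field mapping from a journal tool record to a
            # bio.tools-style payload (keys assumed from biotoolsSchema).
            return {
                "name": jib_record["title"],
                "description": jib_record["abstract"],
                "homepage": jib_record["url"],
                "publication": [{"doi": jib_record["doi"]}],
                "toolType": [jib_record.get("category", "Web application")],
            }

        record = {
            "title": "ExampleDB",
            "abstract": "A database published in JIB.",
            "url": "https://example.org/exampledb",
            "doi": "10.1515/jib-0000-0000",  # placeholder DOI, not a real publication
            "category": "Database portal",
        }
        print(jib_to_biotools(record))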

    EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats

    Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2, available at http://edamontology.org/EDAM_1.2.owl.
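
    Because EDAM is distributed as OWL, it can be inspected with standard RDF tooling. A minimal sketch, assuming the OWL release at the URL above is served as RDF/XML and that rdflib is installed:

        from rdflib import Graph, RDF, RDFS, OWL

        g = Graph()
        # The OWL release is assumed to be available as RDF/XML at this URL.
        g.parse("http://edamontology.org/EDAM.owl", format="xml")

        labels = [
            str(label)
            for cls in g.subjects(RDF.type, OWL.Class)
            for label in g.objects(cls, RDFS.label)
        ]
        print(len(labels), "labelled classes, for example:", sorted(labels)[:5])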

    EDAM-bioimaging: The ontology of bioimage informatics operations, topics, data, and formats

    EDAM-bioimaging is an extension of the EDAM ontology dedicated to bioimage analysis, bioimage informatics, and bioimaging. It enables interoperable descriptions of software, publications, data, and workflows, fostering reliable and transparent science. EDAM-bioimaging is developed in a community spirit, in a welcoming collaboration between numerous bioimaging experts and ontology developers. Contributions are welcome: you can help by reviewing parts of EDAM-bioimaging, posting comments with suggestions, requirements, or needs for clarification, or participating in a Taggathon or another hackathon; see https://github.com/edamontology/edam-bioimaging#contributing. Development takes place in an interdisciplinary open collaboration supported by the hosting institutions, participating individuals, the NEUBIAS COST Action (CA15124), and ELIXIR-EXCELERATE (676559), funded by the Horizon 2020 Framework Programme of the European Union.

    Towards FAIR principles for research software

    The FAIR Guiding Principles, published in 2016, aim to improve the findability, accessibility, interoperability and reusability of digital research objects for both humans and machines. Until now the FAIR principles have been mostly applied to research data. The ideas behind these principles are, however, also directly relevant to research software. Hence there is a distinct need to explore how the FAIR principles can be applied to software. In this work, we aim to summarize the current status of the debate around FAIR and software, as a basis for the development of community-agreed principles for FAIR research software in the future. We discuss what makes software different from data with regard to the application of the FAIR principles, and which desired characteristics of research software go beyond FAIR. Then we present an analysis of where the existing principles can directly be applied to software, where they need to be adapted or reinterpreted, and where the definition of additional principles is required. Here interoperability has proven to be the most challenging principle, calling for particular attention in future discussions. Finally, we outline next steps on the way towards definitive FAIR principles for research software.
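
    The article discusses principles rather than a concrete mechanism, but one small, practical illustration of findability and reusability for software is checking a repository for machine-readable metadata files. The sketch below is our own illustration, not a recommendation from the article, and the file list is an assumption.

        from pathlib import Path

        # The file list is an assumption about commonly used machine-readable
        # software metadata; it is not prescribed by the article.
        METADATA_FILES = ["codemeta.json", "CITATION.cff", "LICENSE", "README.md"]

        def metadata_report(repo_path: str) -> dict:
            repo = Path(repo_path)
            return {name: (repo / name).is_file() for name in METADATA_FILES}

        print(metadata_report("."))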
